Our Works on BNNs
FIGURE 1.8 Our research agenda on BNNs.
lem in a theoretical framework. In ReBNNs, the reconstruction loss introduced in MCN can
theoretically reduce gradient oscillation by adjusting its balanced factor.
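To make the idea concrete, the following is a minimal sketch of such a weighted reconstruction term in PyTorch. The function name, the balance-factor value, and the tensor shapes are illustrative assumptions, not the actual MCN/ReBNN implementation.

```python
import torch

def reconstruction_loss(real_weights, scale, balance_factor):
    """Weighted reconstruction term in the spirit of MCN/ReBNN: it penalizes
    the gap between the latent full-precision weights and their scaled binary
    counterparts, and the balance factor controls how strongly this term
    damps sign flips (i.e., gradient oscillation)."""
    binary_weights = scale * torch.sign(real_weights)
    return balance_factor * torch.sum((real_weights - binary_weights) ** 2)

# Illustrative usage for the latent weights of one binarized convolution.
w = torch.randn(64, 64, 3, 3, requires_grad=True)
alpha = w.abs().mean().detach()              # per-layer scaling factor
loss = reconstruction_loss(w, alpha, balance_factor=1e-4)
loss.backward()                              # the balance factor scales this gradient
print(w.grad.abs().mean())
```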
Although the performance of BNNs has improved dramatically in the last three years,
the gap with their full-precision counterparts remains large. One possible solution
comes from neural architecture search (NAS), which has led to state-of-the-art
performance in many learning tasks. A natural idea is to introduce NAS
into BNNs, leading to our binarized neural architecture search (BNAS) [35]. In our BNAS
framework, we show that the BNNs obtained by BNAS can outperform conventional models
by a large margin. While BNAS only focuses on kernel binarization to achieve 1-bit CNNs,
our CP-NAS [304] advances this work to binarize both weights and activations. In CP-NAS,
a Child-Parent (CP) model is introduced into differentiable NAS to search for the binarized
architecture (child) under the supervision of a full-precision model (parent). Based on CP-
NAS, we achieve much better performance than conventional binarized neural networks.
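The Child-Parent idea can be illustrated with a small sketch in PyTorch. Everything here, the class MixedBinaryOp, the search space of three 3x3 convolutions, and the MSE-based parent supervision, is an illustrative assumption rather than the actual CP-NAS implementation.

```python
import torch
import torch.nn.functional as F

class MixedBinaryOp(torch.nn.Module):
    """One edge of a DARTS-style search cell: a softmax over architecture
    parameters mixes candidate convolutions whose kernels are binarized
    (sign with a per-op scale)."""
    def __init__(self, channels, num_ops=3):
        super().__init__()
        self.alphas = torch.nn.Parameter(torch.zeros(num_ops))  # architecture parameters
        self.ops = torch.nn.ModuleList(
            [torch.nn.Conv2d(channels, channels, 3, padding=1, bias=False)
             for _ in range(num_ops)]
        )

    def forward(self, x):
        arch_weights = F.softmax(self.alphas, dim=0)
        out = 0.0
        for a, op in zip(arch_weights, self.ops):
            scale = op.weight.abs().mean()
            bin_w = scale * torch.sign(op.weight)   # binarized kernel (no STE here, for brevity)
            out = out + a * F.conv2d(x, bin_w, padding=1)
        return out

# The binarized, searchable "child" is pulled toward the output of a
# full-precision "parent"; this distillation-style term stands in for the
# Child-Parent supervision that guides the search.
x = torch.randn(2, 16, 8, 8)
child = MixedBinaryOp(16)
parent = torch.nn.Conv2d(16, 16, 3, padding=1, bias=False)  # placeholder for the full-precision parent
cp_loss = F.mse_loss(child(x), parent(x).detach())
cp_loss.backward()   # gradients reach both the kernels and the architecture parameters
```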
Our research agenda on BNNs is shown in Fig. 1.8.